While there have been a number of remarkable breakthroughs in machine learning (ML), much of the focus has been placed on model development. However, to truly realize the potential of machine learning in real-world settings, additional aspects must be considered across the ML pipeline. Data-centric AI is emerging as a unifying paradigm that could enable such reliable end-to-end pipelines. However, this remains a nascent area with no standardized framework to guide practitioners to the necessary data-centric considerations or to communicate the design of data-centric ML systems. To address this gap, we propose DC-Check, an actionable checklist-style framework to elicit data-centric considerations at different stages of the ML pipeline: Data, Training, Testing, and Deployment. This data-centric lens aims to promote thoughtfulness and transparency prior to system development. Additionally, we highlight specific data-centric AI challenges and research opportunities. DC-Check is aimed at both practitioners and researchers to guide day-to-day development. As such, to make it easy to engage with and use DC-Check and its associated resources, we provide a DC-Check companion website (https://www.vanderschaar-lab.com/dc-check/). The website will also serve as an updated resource as methods and tooling evolve over time.
Uncertainty quantification (UQ) is essential for creating trustworthy machine learning models. Recent years have seen a steep rise in UQ methods that can flag suspicious examples; however, it is often unclear what exactly these methods identify. In this work, we propose an assumption-light method for interpreting the UQ model itself. We introduce the confusion density matrix, a kernel-based approximation of the misclassification density, and use it to categorize suspicious examples identified by a given UQ method into three classes: out-of-distribution (OOD) examples, boundary (Bnd) examples, and examples in regions of high in-distribution misclassification (IDM). Through extensive experiments, we shed light on existing UQ methods and show that the causes of uncertainty differ across models. In addition, we show how the proposed framework can make use of the categorized examples to improve predictive performance.
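As a rough illustration of this kind of categorization, the sketch below scores a flagged example against kernel density estimates of the training data and of the misclassified training points. The thresholds, bandwidth, and three-way decision rule are hypothetical simplifications for illustration; the paper's confusion density matrix is a richer construction than these scalar cutoffs.

```python
import math

def gaussian_kernel(x, y, h=1.0):
    # Radial basis kernel between feature vectors x and y with bandwidth h.
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * h * h))

def density(point, reference, h=1.0):
    # Kernel density estimate of `point` under the `reference` set.
    if not reference:
        return 0.0
    return sum(gaussian_kernel(point, r, h) for r in reference) / len(reference)

def categorize(x, train_all, train_misclassified, ood_thresh=0.1, idm_thresh=0.5):
    # Hypothetical decision rule: low data density -> OOD; high relative
    # density of in-distribution errors -> IDM; otherwise boundary (Bnd).
    p_data = density(x, train_all)
    if p_data < ood_thresh:
        return "OOD"
    p_err = density(x, train_misclassified)
    if p_err / max(p_data, 1e-12) > idm_thresh:
        return "IDM"
    return "Bnd"

train = [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0), (1.1, 0.9)]
errors = [(1.0, 1.0), (1.1, 0.9)]
print(categorize((5.0, 5.0), train, errors))  # -> OOD
```

A point far from all training data is flagged OOD, while one inside a cluster of training-time errors is flagged IDM.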
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare by assisting decision-makers in answering "what-if" questions. Existing causal inference approaches typically consider regular, discrete-time intervals between observations and treatment decisions, and hence cannot naturally model irregularly sampled data, which is the common setting in practice. To handle arbitrary observation patterns, we interpret the data as samples from an underlying continuous-time process and propose to model its latent trajectory explicitly using the mathematics of controlled differential equations. This leads to a new approach, the Treatment Effect Neural Controlled Differential Equation (TE-CDE), which allows the potential outcomes to be evaluated at any time point. In addition, adversarial training is used to adjust for time-dependent confounding, which is critical in longitudinal settings and poses an additional challenge not encountered in conventional time series. To assess solutions to this problem, we propose a controllable simulation environment based on a model of tumor growth, reflecting a range of clinical scenarios. TE-CDE consistently outperforms existing approaches in all simulated scenarios with irregular sampling.
We are interested in unsupervised structure learning, with a particular focus on directed acyclic graphical (DAG) models. The computation required to infer these structures is typically super-exponential in the number of variables, as inference requires sweeping a combinatorially large space of potential structures; that was the case until recent advances allowed this space to be searched with a differentiable metric, drastically reducing search time. While this technique, named NOTEARS, is widely regarded as a seminal work in DAG discovery, it concedes an important property in favor of differentiability: transportability. In our paper, we introduce D-Struct, which recovers transportability in the discovered structures through a novel architecture and loss function while remaining fully differentiable. Because D-Struct remains differentiable, our method can be adopted as easily as NOTEARS was before it. In our experiments, we validate D-Struct with respect to edge accuracy and structural Hamming distance.
Computational models that provide accurate estimates of their uncertainty are crucial for risk management associated with decision-making in healthcare settings. This is especially true given that many state-of-the-art systems are trained on data that has been labeled automatically (in a self-supervised fashion) and tends to be over-labeled. In this work, we investigate the quality of uncertainty estimates from a range of current state-of-the-art predictive models, applied to the problem of observation detection in radiology reports. This problem remains understudied in natural language processing for the healthcare domain. We demonstrate that Gaussian processes (GPs) deliver superior performance in quantifying the risks of three uncertainty labels, based on the negative log predictive probability (NLPP) evaluation metric and mean maximum predicted confidence levels (MMPCL), while retaining strong predictive performance.
Systematic quantification of data quality is critical for consistent model performance. Prior work has focused on out-of-distribution data. Instead, we tackle an understudied yet equally important problem: characterizing incongruous regions of in-distribution (ID) data, which may arise from feature-space heterogeneity. To this end, we propose a paradigm shift with Data-SUITE, a data-centric AI framework that identifies these regions independently of any task-specific model. Data-SUITE leverages copula modeling, representation learning, and conformal prediction to build feature-wise confidence interval estimators based on a set of training instances. These estimators can be used to evaluate the congruence of test instances with respect to the training set, answering two practically useful questions: (1) which test instances will be reliably predicted by a model trained on the training instances? and (2) can we identify incongruous regions of the feature space so that data owners understand the data's limitations or can guide future data collection? We empirically validate Data-SUITE's performance and coverage guarantees and demonstrate, on cross-site medical data, biased data, and data with concept drift, that Data-SUITE best identifies ID regions where a downstream model may be reliable (independent of said model). We also illustrate how these identified regions can provide insights into datasets and highlight their limitations.
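The conformal prediction ingredient mentioned above can be sketched in isolation. The snippet below shows a minimal split-conformal interval: the half-width is a quantile of calibration residuals, and a test instance is flagged as incongruent when its observed value falls outside the interval around its prediction. The function names and the residual values are illustrative, not Data-SUITE's API.

```python
import math

def conformal_half_width(cal_residuals, alpha=0.1):
    # Split conformal prediction: the ceil((n+1)(1-alpha))-th smallest
    # calibration residual yields an interval half-width with marginal
    # coverage of at least 1 - alpha on exchangeable data.
    scores = sorted(cal_residuals)
    n = len(scores)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return scores[k]

def is_congruent(predicted, observed, half_width):
    # Flag a test instance as incongruent when the observed value falls
    # outside the conformal interval around its prediction.
    return abs(observed - predicted) <= half_width

residuals = [0.12, 0.05, 0.30, 0.08, 0.22, 0.15, 0.11, 0.27, 0.09, 0.18]
hw = conformal_half_width(residuals, alpha=0.2)
print(hw)  # -> 0.27
```

The coverage guarantee this inherits from conformal prediction is what underpins the framework's "performance and coverage guarantees" claim.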
Pruning refers to the elimination of trivial weights from neural networks. The sub-networks produced by pruning an overparameterized model are often called lottery tickets. This research aims to generate winning lottery tickets, from a set of lottery tickets, that can achieve accuracy similar to the original unpruned network. We introduce a novel winning ticket called the Cyclic Overlapping Lottery Ticket (COLT), obtained by data splitting and cyclic retraining of the pruned network from scratch. We apply a cyclic pruning algorithm that keeps only the overlapping weights of different pruned models trained on different data segments. Our results demonstrate that COLT can achieve accuracies similar to those of the unpruned model while maintaining high sparsity. We show that the accuracy of COLT is on par with, and at times better than, the winning tickets of the Lottery Ticket Hypothesis (LTH). Moreover, COLTs can be generated in fewer iterations than tickets produced by the popular Iterative Magnitude Pruning (IMP) method. In addition, we observe that COLTs generated on large datasets can be transferred to smaller ones without compromising performance, demonstrating their generalization capability. We conduct all our experiments on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets and report superior performance compared to state-of-the-art methods.
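The core "overlapping weights" step can be sketched with magnitude-pruning masks. The sketch below prunes two (toy) weight vectors trained on different data splits and keeps only the positions that survive in both; the helper names and 50% sparsity are assumptions for illustration, and a real implementation would operate on per-layer tensors with retraining between rounds.

```python
def prune_mask(weights, sparsity):
    # Magnitude pruning: keep the largest-magnitude (1 - sparsity)
    # fraction of weights, zeroing out the rest.
    k = int(len(weights) * (1 - sparsity))
    keep = sorted(range(len(weights)), key=lambda i: abs(weights[i]), reverse=True)[:k]
    mask = [0] * len(weights)
    for i in keep:
        mask[i] = 1
    return mask

def overlap_mask(*masks):
    # COLT-style step: retain only the weights that survive pruning in
    # every model trained on a different data segment.
    return [int(all(bits)) for bits in zip(*masks)]

w_split1 = [0.9, -0.1, 0.5, 0.05]   # weights after training on segment 1
w_split2 = [0.8, 0.6, -0.05, 0.02]  # weights after training on segment 2
m = overlap_mask(prune_mask(w_split1, 0.5), prune_mask(w_split2, 0.5))
print(m)  # -> [1, 0, 0, 0]
```

Only the first weight is large in both splits, so only it survives the overlap; cyclic retraining would then repeat this at increasing sparsity.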
Automatic medical image classification is an important field where the use of AI has the potential for real social impact. However, many challenges still stand in the way of practically effective solutions. One of them is that most medical imaging datasets suffer from class imbalance, which causes existing AI techniques, particularly neural-network-based deep learning methods, to perform poorly in such scenarios. This makes the area an interesting and active research focus. In this study, we propose a novel loss function for training neural network models that mitigates this critical issue. Through rigorous experiments on three independently collected datasets from three different medical imaging domains, we empirically show that our proposed loss function consistently performs well, with an improvement of 2-10% in macro F1 over the baseline models. We hope that our work will precipitate new research toward a more generalized approach to medical image classification.
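For context on why class imbalance distorts training losses, the sketch below shows a standard inverse-frequency weighted cross-entropy, a common baseline for this problem. This is not the paper's proposed loss (the abstract does not specify its form); it only illustrates the general mechanism of making minority-class errors cost more.

```python
import math

def weighted_cross_entropy(probs, label, class_counts):
    # Inverse-frequency class weighting: a mistake on a rare class incurs
    # a proportionally larger loss than one on a frequent class.
    total = sum(class_counts)
    weight = total / (len(class_counts) * class_counts[label])
    return -weight * math.log(probs[label])

# 90/10 imbalanced binary problem; the true class gets probability 0.6
# in both cases, but the minority-class example is penalized more.
majority = weighted_cross_entropy([0.6, 0.4], 0, [90, 10])
minority = weighted_cross_entropy([0.4, 0.6], 1, [90, 10])
print(majority, minority)
```

With balanced counts the weights reduce to 1 and this collapses to ordinary cross-entropy.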
Machine learning models are known to be susceptible to adversarial perturbation. One famous attack is the adversarial patch, a sticker with a particularly crafted pattern that makes the model incorrectly predict the object it is placed on. This attack presents a critical threat to cyber-physical systems that rely on cameras such as autonomous cars. Despite the significance of the problem, conducting research in this setting has been difficult; evaluating attacks and defenses in the real world is exceptionally costly while synthetic data are unrealistic. In this work, we propose the REAP (REalistic Adversarial Patch) benchmark, a digital benchmark that allows the user to evaluate patch attacks on real images, and under real-world conditions. Built on top of the Mapillary Vistas dataset, our benchmark contains over 14,000 traffic signs. Each sign is augmented with a pair of geometric and lighting transformations, which can be used to apply a digitally generated patch realistically onto the sign. Using our benchmark, we perform the first large-scale assessments of adversarial patch attacks under realistic conditions. Our experiments suggest that adversarial patch attacks may present a smaller threat than previously believed and that the success rate of an attack on simpler digital simulations is not predictive of its actual effectiveness in practice. We release our benchmark publicly at https://github.com/wagner-group/reap-benchmark.
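The per-sign transformation pair can be pictured with a minimal sketch. The snippet below applies only the lighting half (an affine gain/bias relighting of the digital patch) before pasting it at the sign's location; the gain and bias values here are hypothetical placeholders for parameters the benchmark fits per sign, and the geometric (perspective) warp is omitted.

```python
import numpy as np

def apply_patch(sign_img, patch, top_left, gain=0.8, bias=10.0):
    # Relighting: an affine transform maps the digitally generated patch
    # toward the sign's real lighting conditions (gain/bias hypothetical).
    relit = np.clip(patch.astype(float) * gain + bias, 0, 255)
    # Paste at the estimated sign location. The real benchmark also
    # applies a per-sign geometric (perspective) warp, omitted here.
    out = sign_img.astype(float).copy()
    r, c = top_left
    h, w = patch.shape[:2]
    out[r:r + h, c:c + w] = relit
    return out

scene = np.zeros((4, 4))
patch = np.full((2, 2), 100.0)
print(apply_patch(scene, patch, (1, 1)))
```

Even this toy version shows why digital-only evaluations can mislead: the patch the model actually sees (value 90 here) differs from the patch the attacker designed (value 100).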
We study the problem of profiling news media on the Web with respect to their factuality of reporting and bias. This is an important but under-studied problem related to disinformation and "fake news" detection, but it addresses the issue at a coarser granularity than looking at an individual article or an individual claim. This is useful as it allows entire media outlets to be profiled in advance. Unlike previous work, which has focused primarily on text (e.g., the text of the articles published by the target website, or the textual descriptions in their social media profiles or on Wikipedia), our main focus here is on modeling the similarity between media outlets based on the overlap of their audiences. This is motivated by homophily, i.e., the tendency of people to connect with others who share their interests, which we extend to media, hypothesizing that similar types of media are read by similar kinds of users. In particular, we propose GREENER (GRaph nEural nEtwork for News mEdia pRofiling), a model that builds a graph of inter-media connections based on audience overlap and then uses graph neural networks to represent each medium. We find that such representations are quite useful for predicting the factuality and the bias of news media outlets, yielding improvements over state-of-the-art results reported on two datasets. When augmented with conventionally used representations obtained from news articles, Twitter, YouTube, Facebook, and Wikipedia, prediction accuracy improves by 2.5-27 macro-F1 points across the two tasks.
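The audience-overlap graph construction can be sketched directly. The snippet below connects outlets whose follower sets have high Jaccard similarity; the outlet names, similarity measure, and threshold are illustrative assumptions (the paper does not specify them in the abstract), and a GNN would then operate on the resulting weighted graph.

```python
def jaccard(a, b):
    # Audience overlap between two outlets: Jaccard similarity of their
    # follower / reader sets.
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def audience_graph(audiences, threshold=0.1):
    # Add a weighted edge between every pair of outlets whose audience
    # overlap exceeds a (hypothetical) threshold.
    names = sorted(audiences)
    edges = {}
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            w = jaccard(audiences[u], audiences[v])
            if w >= threshold:
                edges[(u, v)] = w
    return edges

g = audience_graph({
    "outlet_a": {"u1", "u2", "u3"},
    "outlet_b": {"u2", "u3", "u4"},
    "outlet_c": {"u9"},
})
print(g)  # -> {('outlet_a', 'outlet_b'): 0.5}
```

Outlets with disjoint audiences (like `outlet_c` here) remain isolated, which is exactly the homophily signal the model exploits.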